{
"cells": [
{
"cell_type": "markdown",
"id": "096007d3-96ef-4c1a-abfa-0907a31dcef7",
"metadata": {},
"source": [
"# Homework 4\n",
"\n",
"In this homework you will get experience with Q-learning applied to some classic domains from the early literature on reinforcement learning. You'll implement tabular Q-learning, in which the states and actions must be discrete. The underlying domains have discrete action spaces but continuous observation spaces. I've provided code that will convert continuous observations into discrete ones. In a later homework we'll use neural networks to solve these same problems without the need to discretize."
]
},
{
"cell_type": "markdown",
"id": "838aaad0-a5ca-405b-9c2f-4d238f45a8b1",
"metadata": {},
"source": [
"## Task 1: Set up your environment\n",
"\n",
"There is nothing to turn in for this task.\n",
"\n",
"You'll need to pip install the following packages:\n",
"* gymnasium[classic-control] - a collection of RL domains\n",
"* tqdm - a tool for monitoring the progress of loops that run a long time\n",
"* numpy - a collection of useful tools for \"mathy\" things\n",
"* matplotlib - a collection of plotting utilities"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "7841f294",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: gymnasium[classic-control] in ./hwk4/lib/python3.11/site-packages (1.0.0)\n",
"Requirement already satisfied: numpy>=1.21.0 in ./hwk4/lib/python3.11/site-packages (from gymnasium[classic-control]) (2.1.2)\n",
"Requirement already satisfied: cloudpickle>=1.2.0 in ./hwk4/lib/python3.11/site-packages (from gymnasium[classic-control]) (3.1.0)\n",
"Requirement already satisfied: typing-extensions>=4.3.0 in ./hwk4/lib/python3.11/site-packages (from gymnasium[classic-control]) (4.12.2)\n",
"Requirement already satisfied: farama-notifications>=0.0.1 in ./hwk4/lib/python3.11/site-packages (from gymnasium[classic-control]) (0.0.4)\n",
"Requirement already satisfied: pygame>=2.1.3 in ./hwk4/lib/python3.11/site-packages (from gymnasium[classic-control]) (2.6.1)\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.1.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Requirement already satisfied: gymnasium[box2d] in ./hwk4/lib/python3.11/site-packages (1.0.0)\n",
"Requirement already satisfied: numpy>=1.21.0 in ./hwk4/lib/python3.11/site-packages (from gymnasium[box2d]) (2.1.2)\n",
"Requirement already satisfied: cloudpickle>=1.2.0 in ./hwk4/lib/python3.11/site-packages (from gymnasium[box2d]) (3.1.0)\n",
"Requirement already satisfied: typing-extensions>=4.3.0 in ./hwk4/lib/python3.11/site-packages (from gymnasium[box2d]) (4.12.2)\n",
"Requirement already satisfied: farama-notifications>=0.0.1 in ./hwk4/lib/python3.11/site-packages (from gymnasium[box2d]) (0.0.4)\n",
"Requirement already satisfied: box2d-py==2.3.5 in ./hwk4/lib/python3.11/site-packages (from gymnasium[box2d]) (2.3.5)\n",
"Requirement already satisfied: pygame>=2.1.3 in ./hwk4/lib/python3.11/site-packages (from gymnasium[box2d]) (2.6.1)\n",
"Requirement already satisfied: swig==4.* in ./hwk4/lib/python3.11/site-packages (from gymnasium[box2d]) (4.2.1.post0)\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.1.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Requirement already satisfied: matplotlib in ./hwk4/lib/python3.11/site-packages (3.9.2)\n",
"Requirement already satisfied: contourpy>=1.0.1 in ./hwk4/lib/python3.11/site-packages (from matplotlib) (1.3.0)\n",
"Requirement already satisfied: cycler>=0.10 in ./hwk4/lib/python3.11/site-packages (from matplotlib) (0.12.1)\n",
"Requirement already satisfied: fonttools>=4.22.0 in ./hwk4/lib/python3.11/site-packages (from matplotlib) (4.54.1)\n",
"Requirement already satisfied: kiwisolver>=1.3.1 in ./hwk4/lib/python3.11/site-packages (from matplotlib) (1.4.7)\n",
"Requirement already satisfied: numpy>=1.23 in ./hwk4/lib/python3.11/site-packages (from matplotlib) (2.1.2)\n",
"Requirement already satisfied: packaging>=20.0 in ./hwk4/lib/python3.11/site-packages (from matplotlib) (24.1)\n",
"Requirement already satisfied: pillow>=8 in ./hwk4/lib/python3.11/site-packages (from matplotlib) (11.0.0)\n",
"Requirement already satisfied: pyparsing>=2.3.1 in ./hwk4/lib/python3.11/site-packages (from matplotlib) (3.2.0)\n",
"Requirement already satisfied: python-dateutil>=2.7 in ./hwk4/lib/python3.11/site-packages (from matplotlib) (2.9.0.post0)\n",
"Requirement already satisfied: six>=1.5 in ./hwk4/lib/python3.11/site-packages (from python-dateutil>=2.7->matplotlib) (1.16.0)\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.1.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Requirement already satisfied: tqdm in ./hwk4/lib/python3.11/site-packages (4.66.5)\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.1.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n"
]
}
],
"source": [
"!pip install gymnasium[classic-control]\n",
"!pip install gymnasium[box2d]\n",
"!pip install matplotlib\n",
"!pip install tqdm"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "338bbbea-1ad4-4a5e-b660-46f57674d707",
"metadata": {},
"outputs": [],
"source": [
"import gymnasium as gym\n",
"import numpy as np\n",
"import random\n",
"from tqdm import tqdm\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"id": "8ceca2fd-f2bd-4840-9d95-8f2ecbca1841",
"metadata": {},
"source": [
"## Task 2: Look at the gymnasium documentation\n",
"\n",
"There is nothing to turn in for this task.\n",
"\n",
"Gymnasium is a package that has a uniform interface to a variety of domains where RL can be used. If your code works for one of them, it will (in theory) work for all of them. The top-level documentation is here:\n",
"\n",
"https://gymnasium.farama.org/index.html\n",
"\n",
"We'll work with 3 domains. Read the documentation for each of them:\n",
"* Mountain car - https://gymnasium.farama.org/environments/classic_control/mountain_car/\n",
"* Acrobot - https://gymnasium.farama.org/environments/classic_control/acrobot/\n",
"* Lunar lander - https://gymnasium.farama.org/environments/box2d/lunar_lander/\n"
]
},
{
"cell_type": "markdown",
"id": "53008c54-b793-49b2-b1b4-6d6e79603a56",
"metadata": {},
"source": [
"Each of the domains produces observations that are vectors of real values. For example, as the documentation for the Mountain Car domain says the state has two real values:\n",
"* The position of the car on the x axis\n",
"* The velocity of the car\n",
"\n",
"The class below converts real-valued vectors into discrete values. You will experiment with the impacts of using coarse or fine discretization. To turn a given observation that is a real-valued vector into a discrete state, the class below divides the range of each variable into a set of uniformly sized, non-overlapping bins.\n",
"\n",
"For example, for the Mountain Car the smallest and largest values of x are -1.2 and 0.6, respectively. If you select 5 bins, the size of each bin will be (0.6 - -1.2)/5 = 0.36. They span the following ranges, which get mapped to distinct integers as shown below:\n",
"* [-1.2, -0.84) -> 0\n",
"* [-0.84, -0.48) -> 1\n",
"* [-0.48, -0.12) -> 2\n",
"* [-0.12, 0.24) -> 3\n",
"* [0.24, 0.6] -> 4\n",
"\n",
"Each element of an observation gets mapped like this, and the resulting string of digits becomes a key to map to the corresponding discrete state. Each time a new key is found (i.e., the system finds itself in a discrete state that it has never seen before) it is mapped to an integer that can be used to index into the Q-table.\n",
"\n",
"Note that when the number of bins is small, the system treats lots of underlying observations as the same state. When the number of bins is large, the system can make more fine distinctions but the Q-table gets to be large and you'll need more experience in the domain to learn about all of those states. You will explore that tradeoff below."
]
},
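{
"cell_type": "markdown",
"id": "c3a9f1e2-5d47-4b8e-9f21-7a0c4d6b8e53",
"metadata": {},
"source": [
"To make the arithmetic above concrete, here is a minimal sketch (not part of the assignment) that computes the bin index for a handful of x values using only the numbers from the Mountain Car example: low = -1.2, high = 0.6, and 5 bins. The clamp on the last line keeps a value exactly at the upper bound in the final bin."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4b0a2f3-6e58-4c9f-8a32-8b1d5e7c9f64",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative only: the bin arithmetic described above, applied to the\n",
"# Mountain Car x range. The Discrete class below does the same thing for\n",
"# every element of an observation.\n",
"low, high, nbins = -1.2, 0.6, 5\n",
"width = (high - low) / nbins  # 0.36\n",
"\n",
"for x in [-1.2, -0.9, -0.5, 0.0, 0.3, 0.6]:\n",
"    bin_index = min(int((x - low) / width), nbins - 1)  # clamp the top edge\n",
"    print('x = %5.2f -> bin %d' % (x, bin_index))"
]
},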
{
"cell_type": "code",
"execution_count": 14,
"id": "b0b3bdcc-1a2e-4420-995b-6f423c69e5fb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([-0.5518799, 0. ], dtype=float32)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Let's look at an observation in the mountain car domain\n",
"env = gym.make(\"MountainCar-v0\", render_mode=None)\n",
"observation, info = env.reset()\n",
"observation"
]
},
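{
"cell_type": "markdown",
"id": "e8b5c7d9-1a2b-4c3d-9e4f-5a6b7c8d9e0f",
"metadata": {},
"source": [
"The Discrete class below relies on three standard Gymnasium attributes: env.observation_space.low, env.observation_space.high, and env.action_space.n. Printing them for the Mountain Car may help when reading the class; this is just a look at the environment, not something you need to modify."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9c6d8ea-2b3c-4d4e-8f5a-6b7c8d9e0f1a",
"metadata": {},
"outputs": [],
"source": [
"# Bounds of the observation space and the number of discrete actions for the\n",
"# Mountain Car. The Discrete class uses low and high to size its bins, and the\n",
"# Q-table later uses action_space.n for its number of columns.\n",
"print('low  =', env.observation_space.low)\n",
"print('high =', env.observation_space.high)\n",
"print('number of actions =', env.action_space.n)"
]
},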
{
"cell_type": "code",
"execution_count": 15,
"id": "447e55ee-8984-46ac-8ec7-5706c52645b3",
"metadata": {},
"outputs": [],
"source": [
"class Discrete:\n",
" \n",
" def __init__(self, env, nbins):\n",
" \"\"\"\n",
" Arguments:\n",
" env - A Gymnasium environment that was created by a call to gym.make()\n",
" nbins - If this is an integer, then each of the elements of an observation\n",
" are mapped into nbins non-overlapping intervals whose size is\n",
" (high - low) / nbins. If this is a list, then the list must be the\n",
" same size as an observation and each element of the list specifies the\n",
" number of bins for the corresponding element of an observation.\n",
" This makes it possible to use different numbers of bins for \n",
" different elements of an observation.\n",
" \"\"\"\n",
" \n",
" nobs = env.observation_space.shape[0]\n",
" if type(nbins) == int:\n",
" nbins = [nbins] * nobs\n",
" else:\n",
" assert len(nbins) == nobs, \"You must supply %d bin values\" % nobs\n",
" self.env = env\n",
" self.nbins = nbins\n",
" self.widths = []\n",
" for low, high, n in zip(env.observation_space.low,\n",
" env.observation_space.high,\n",
" nbins):\n",
" self.widths.append((high-low)/n)\n",
"\n",
" self.state_map = {}\n",
"\n",
" \n",
" def size(self):\n",
" \"\"\"\n",
" Return the size of the state space.\n",
" \"\"\"\n",
" \n",
" return np.prod(self.nbins)\n",
"\n",
" \n",
" def discretize(self, obs):\n",
" \"\"\"\n",
" Return the discrete state to which an observation corresponds.\n",
" \"\"\"\n",
" \n",
" state = '' \n",
" for value, low, width in zip(obs, env.observation_space.low, self.widths):\n",
" state += '%d' % ((value - low)/width)\n",
" if state not in self.state_map:\n",
" self.state_map[state] = len(self.state_map)\n",
" return self.state_map[state]"
]
},
{
"cell_type": "markdown",
"id": "0986ccce-9e56-484d-8145-4338c71e541d",
"metadata": {},
"source": [
"## Task 3: Experiment with different discretization granularities\n",
"\n",
"There is nothing to turn in for this task.\n",
"\n",
"Below is an example of running the Mountain Car environment for a few steps and printing out the observation and state. Note what happens when the number of bins is 10 in terms of which states are visited. Change it to other values, higher and lower, and see how the states change in terms of granularity."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "9fb8c83a-0a78-419e-bc79-4c184d0cb79a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of distinct states = 4\n",
"State = 0, Observation = [-0.4390875 -0.01037588]\n",
"State = 0, Observation = [-0.45009047 -0.01100294]\n",
"State = 0, Observation = [-0.46064025 -0.0105498 ]\n",
"State = 0, Observation = [-0.47165945 -0.01101919]\n",
"State = 0, Observation = [-0.4840666 -0.01240716]\n",
"State = 0, Observation = [-0.49576956 -0.01170295]\n",
"State = 0, Observation = [-0.507681 -0.01191143]\n",
"State = 0, Observation = [-0.51871175 -0.01103077]\n",
"State = 0, Observation = [-0.5297792 -0.01106742]\n",
"State = 0, Observation = [-0.5408002 -0.01102107]\n",
"State = 0, Observation = [-0.55269235 -0.01189211]\n",
"State = 0, Observation = [-0.56536657 -0.01267419]\n",
"State = 0, Observation = [-0.5767283 -0.01136175]\n",
"State = 0, Observation = [-0.58869326 -0.01196496]\n",
"State = 0, Observation = [-0.6001731 -0.01147985]\n",
"State = 0, Observation = [-0.6110837 -0.01091058]\n",
"State = 0, Observation = [-0.62134564 -0.01026195]\n",
"State = 0, Observation = [-0.62988496 -0.00853931]\n",
"State = 0, Observation = [-0.6386406 -0.00875561]\n",
"State = 0, Observation = [-0.6465504 -0.00790982]\n",
"State = 0, Observation = [-0.65355885 -0.00700845]\n",
"State = 0, Observation = [-0.6606171 -0.00705826]\n",
"State = 0, Observation = [-0.6656764 -0.00505931]\n",
"State = 0, Observation = [-0.6707021 -0.00502571]\n",
"State = 0, Observation = [-0.67366 -0.00295789]\n",
"State = 0, Observation = [-0.6745301 -0.00087007]\n",
"State = 1, Observation = [-6.7430645e-01 2.2363162e-04]\n",
"State = 1, Observation = [-6.7399061e-01 3.1582214e-04]\n",
"State = 1, Observation = [-0.6715847 0.00240588]\n",
"State = 1, Observation = [-0.66910505 0.00247967]\n",
"State = 1, Observation = [-0.6665684 0.00253664]\n",
"State = 1, Observation = [-0.6619921 0.00457634]\n",
"State = 1, Observation = [-0.65540737 0.00658473]\n",
"State = 1, Observation = [-0.64785963 0.00754773]\n",
"State = 1, Observation = [-0.6394014 0.00845825]\n",
"State = 1, Observation = [-0.63009197 0.0093094 ]\n",
"State = 1, Observation = [-0.6189974 0.01109459]\n",
"State = 1, Observation = [-0.60619706 0.01280035]\n",
"State = 1, Observation = [-0.59378356 0.01241351]\n",
"State = 1, Observation = [-0.5818475 0.01193602]\n",
"State = 1, Observation = [-0.56947684 0.01237066]\n",
"State = 1, Observation = [-0.55776316 0.01171366]\n",
"State = 1, Observation = [-0.5457937 0.01196945]\n",
"State = 1, Observation = [-0.5326579 0.01313579]\n",
"State = 1, Observation = [-0.5204542 0.01220372]\n",
"State = 1, Observation = [-0.5082741 0.01218014]\n",
"State = 1, Observation = [-0.49620885 0.01206525]\n",
"State = 1, Observation = [-0.4853488 0.01086005]\n",
"State = 1, Observation = [-0.47377497 0.01157381]\n",
"State = 1, Observation = [-0.46257347 0.01120152]\n"
]
}
],
"source": [
"disc = Discrete(env, 2) # <--- Change the 10 to other values and explore\n",
"\n",
"print(\"Number of distinct states = %d\" % disc.size())\n",
"\n",
"for _ in range(50):\n",
" action = env.action_space.sample()\n",
" observation, reward, terminated, truncated, info = env.step(action)\n",
" state = disc.discretize(observation)\n",
" print ('State = %s, Observation = %s' % (state, observation))"
]
},
{
"cell_type": "markdown",
"id": "1e1bb9d1-57f6-4ef9-ae71-49085c0bec9f",
"metadata": {},
"source": [
"## Task 4: Finish the Q-learner\n",
"\n",
"The code that you write for this task will be part of your grade on this assignment.\n",
"\n",
"Below is a Q-learner class. It has an init() method and a method for choosing the greedy action. You'll need to\n",
"* add a method for choosing an epsilon-greedy action\n",
"* add a method for performing a Q update\n",
"\n",
"I've provided stubs for those methods. Recall that the epsilon-greedy action is one that is randomly chosen with probability epsilon and greedy with probability 1 - epsilon."
]
},
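{
"cell_type": "markdown",
"id": "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d",
"metadata": {},
"source": [
"As a reminder (this is the standard tabular rule, stated here for reference), for a transition from state $s$ to state $s'$ under action $a$ with reward $r$, the Q-learning update is\n",
"\n",
"$$Q(s, a) \\leftarrow Q(s, a) + \\alpha \\left[ r + \\gamma \\max_{a'} Q(s', a') - Q(s, a) \\right]$$\n",
"\n",
"where $\\alpha$ is the learning rate (self.alpha) and $\\gamma$ is the discount factor (self.gamma)."
]
},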
{
"cell_type": "code",
"execution_count": 6,
"id": "d372467e-c9bd-462e-9eb2-c36fadba76a9",
"metadata": {},
"outputs": [],
"source": [
"class Q:\n",
"\n",
" def __init__(self, nstates, nactions):\n",
" \"\"\"\n",
" Arguments:\n",
" nstates - The number of distinct states\n",
" nactions - The number of distinct actions\n",
" \"\"\"\n",
" \n",
" self.gamma = 0.999 # Discount factor\n",
" self.alpha = 0.1 # Learning rate\n",
" self.epsilon = 0.05 # Exploration probability\n",
"\n",
" # Create a Q-table initialized to 0\n",
" self.table = np.zeros((nstates, nactions))\n",
"\n",
" \n",
" def greedy_action(self, state):\n",
" \"\"\"\n",
" Return the greedy action for a state. If multiple actions have the \n",
" same highest Q-value, choose one of them at random.\n",
"\n",
" Arguments:\n",
" state - The state in which the action is to be taken\n",
"\n",
" Returns: The optimal action\n",
" \"\"\"\n",
" \n",
" qmax = self.table[state].max()\n",
" greedy = [idx for idx, value in enumerate(self.table[state]) if value == qmax]\n",
" return random.choice(greedy)\n",
"\n",
"\n",
" ###\n",
" ### You need to write this\n",
" ###\n",
" def get_action(self, state):\n",
" \"\"\"\n",
" Choose an action that is epsilon greedy in a given state.\n",
"\n",
" Arguments:\n",
" state - The state in which the action is to be taken\n",
"\n",
" Returns: The action\n",
"\n",
" \"\"\"\n",
" \n",
" pass\n",
" \n",
"\n",
" ###\n",
" ### You need to write this\n",
" ###\n",
" def update(self, state1, action, reward, state2):\n",
" \"\"\"\n",
" Given that an action was taken in state 1, leading to a specific reward and\n",
" a transition to state 2, perform one update on the Q-table.\n",
" \"\"\"\n",
" \n",
" pass "
]
},
{
"cell_type": "markdown",
"id": "8751d562-e04b-47dd-8894-da7eec6af290",
"metadata": {},
"source": [
"## Task 5: Watch a domain run\n",
"\n",
"There is nothing to turn in for this task.\n",
"\n",
"The function below allows you to run a domain using a Q-table for action selection and see what is going on. Try calling it with each of the domains below to see them in action. Note that the Q-table is initialized to all zeros so the greedy action is random.\n",
"\n",
"When you run the function you should see a window pop up with a visualization of the domain."
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "4ab9f9dd-e8ab-4f0a-a73b-9fc21d84b2ea",
"metadata": {},
"outputs": [],
"source": [
"MOUNTAIN_CAR = \"MountainCar-v0\"\n",
"ACROBOT = \"Acrobot-v1\"\n",
"LUNAR_LANDER = \"LunarLander-v3\""
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "01d54ffa-8629-45af-8633-2d9a186018a3",
"metadata": {},
"outputs": [],
"source": [
"def run_domain(env, q, disc, steps):\n",
" \"\"\"\n",
" Arguments:\n",
" env - A Gymnasium enviroment that was created with gym.make()\n",
" q - A Q instance\n",
" disc - A Discrete instance\n",
" steps - The number of steps for which to run the domain\n",
" \"\"\"\n",
" \n",
" observation, info = env.reset()\n",
"\n",
" for _ in tqdm(range(steps)):\n",
" state = disc.discretize(observation)\n",
" action = q.greedy_action(state) \n",
" observation, reward, terminated, truncated, info = env.step(action)\n",
" if terminated or truncated:\n",
" observation, info = env.reset()"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "c96bfb4a-f27e-4b00-8e01-a98380a62fa2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-10-24 10:07:25.313 Python[67322:20475503] WARNING: Secure coding is not enabled for restorable state! Enable secure coding by implementing NSApplicationDelegate.applicationSupportsSecureRestorableState: and returning YES.\n",
"100%|█████████████████████████████████████████| 500/500 [00:16<00:00, 29.62it/s]\n"
]
}
],
"source": [
"# Create a domain - NOTE that using render_mode \"human\" visualizes the domain\n",
"env = gym.make(MOUNTAIN_CAR, render_mode=\"human\")\n",
"\n",
"# Create an object to discretize it\n",
"disc = Discrete(env, 10)\n",
"\n",
"# Create a Q-learner\n",
"q = Q(disc.size(), env.action_space.n)\n",
"\n",
"# Run the domain\n",
"run_domain(env, q, disc, 500)"
]
},
{
"cell_type": "markdown",
"id": "7420ba13-4a2b-4f2d-b360-908b483bdd25",
"metadata": {},
"source": [
"## Task 6: Implement Q-learning and test it on the Mountain Car domain\n",
"\n",
"Your code for Q-learning will be part of your grade for this homework.\n",
"\n",
"The Mountain Car domain is the easiest one so you should start there. I've found that using the default parameters in the Q class, nbins = 30, and 500K steps you can learn an optimal policy.\n",
"\n",
"I've written a stub for the Q-learning function below that you can fill in.\n",
"\n",
"Things to keep in mind:\n",
"* During training you want render_mode = None or it will be very slow\n",
"* If a step() in the domain makes terminated or truncated true, that means the episode ended and you need to reset() the domain. You can look at run_domain() above to see how I handle that."
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "77ee9259-0be9-447f-a860-ddaf587b56a5",
"metadata": {},
"outputs": [],
"source": [
"def learn_domain(env, q, disc, steps):\n",
" \"\"\"\n",
" Arguments:\n",
" env - A Gymnasium enviroment that was created with gym.make()\n",
" q - A Q instance\n",
" disc - A Discrete instance\n",
" steps - The number of steps for which to run the domain and perform Q updates\n",
" \"\"\"\n",
"\n",
" pass"
]
},
{
"cell_type": "markdown",
"id": "a9b10890-6786-4ed8-841f-d7c4e1290eed",
"metadata": {},
"source": [
"The two cells below, if your Q-learner and training function are correct, will yield optimal behavior in the Mountain Car domain."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "b7060ac5-12a5-4bc5-a0e2-74e9249a1ed3",
"metadata": {},
"outputs": [],
"source": [
"# Create a Mountain Car with render_mode = None so that it runs fast\n",
"env = gym.make(MOUNTAIN_CAR, render_mode=None)\n",
"\n",
"# Use 30 bins for discretition of each element of the observation\n",
"disc = Discrete(env, 30)\n",
"\n",
"# Allocate a Q table\n",
"q = Q(disc.size(), env.action_space.n)\n",
"\n",
"# Learn!\n",
"learn_domain(env, q, disc, 500000)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "94e44ea4-5648-4fe9-97ae-09a5a9dcfe47",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|█████████████████████████████████████████| 500/500 [00:17<00:00, 27.85it/s]\n"
]
}
],
"source": [
"# Create a version of the domain with render_model = \"human\" so that you can watch it\n",
"env = gym.make(MOUNTAIN_CAR, render_mode=\"human\")\n",
"\n",
"# Observe the policy running\n",
"run_domain(env, q, disc, 500)"
]
},
{
"cell_type": "markdown",
"id": "b10fd00c-0456-4e3e-9643-a8be6eae26cc",
"metadata": {},
"source": [
"## Task 7: Experiment with all domaims\n",
"\n",
"You responses here will be part of your grade for this homework\n",
"\n",
"For each of the three domains:\n",
"* Describe the behavior in the domain when the action selection is random (i.e., when the Q-table is first initialized and prior to any training). This can take the form of a few sentences that explain the behavior you are seeing and why random actions would lead to that behavior.\n",
"* Experiment with a few (3) different values of nbins when discretizing, some small and some larger, and explain differences in the learned policy as manifest when you run it in \"human\" mode between the different levels of descetization. Again, describe the behavior you are seeing and why the level of discretization may have contributed to it, compared to the other behaviors you see for other levels of discretization.\n",
"* For one run in which you got the best learning results plot something that convinces me that learning occured. That could be episode length through time (e.g., the faster the Mountain Car gets to the top of the hill the shorter the episodes) or reward through time (e.g., for the Lunar Lander). Explain how the plot shows evidence that the system is learning."
]
},
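{
"cell_type": "markdown",
"id": "b2c3d4e5-f6a7-4b8c-9d0e-1f2a3b4c5d6e",
"metadata": {},
"source": [
"Below is a minimal plotting sketch for the last bullet. It assumes you collected a list named episode_lengths (a hypothetical name; track whatever per-episode statistic you prefer) while training, with one entry per completed episode. The random placeholder data is only there so the cell runs on its own; replace it with your recorded values."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3d4e5f6-a7b8-4c9d-8e0f-2a3b4c5d6e7f",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch, assuming you recorded one entry per completed episode\n",
"# during training in a list named episode_lengths (hypothetical name).\n",
"episode_lengths = list(np.random.randint(100, 200, size=300))  # placeholder, replace with your data\n",
"\n",
"# A moving average makes the trend easier to see than the raw values\n",
"window = 20\n",
"smoothed = np.convolve(episode_lengths, np.ones(window) / window, mode='valid')\n",
"\n",
"plt.plot(episode_lengths, alpha=0.3, label='raw')\n",
"plt.plot(np.arange(window - 1, len(episode_lengths)), smoothed, label='moving average')\n",
"plt.xlabel('Episode')\n",
"plt.ylabel('Episode length (steps)')\n",
"plt.legend()\n",
"plt.show()"
]
},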
{
"cell_type": "code",
"execution_count": null,
"id": "e0dc39c0-ae58-4ee0-bf2a-e5e9e5d9c92f",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2f4f180-f7b9-4237-a6ca-e28695a85576",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}